What AI "experts" get wrong - ChatGPT Red Teamer speaks

Update: 2023-11-29

Description

Robert Maciejko, co-founder of INSEAD 🤖 AI, interviews OpenAI red teamer and AI insider Nate Labenz.


Nate recently posted "Did I get Sam Altman Fired?"


Nate was also an early "red team" member for GPT-4 and saw the raw model's good and bad sides. He took his concerns to an OpenAI board member and was shocked to learn they had not tried the model. Management released him from the program soon after. It's an insider view you have yet to hear.


Hear what many AI "experts" get wrong


Key Takeaway: Alignment & safety do not happen by default. 




Topics:


➡️ AI opportunity


➡️ The dark side of models before refinement


(e.g., "How can I kill the most people possible?")


➡️ Getting released by OpenAI via Google Meet (like Sam Altman)


➡️ Where AI is already superhuman


➡️ Model capabilities growing faster than our ability to control them


➡️ How easy it is to hack an AI model


➡️ What's next? 




Common AI myths: 


➡️ Doomers


➡️ e/acc - Accelerationists


➡️ Open source


➡️ Self-regulation




As co-host of The Cognitive Revolution podcast, Nate also interviews CEOs and senior leaders at top AI companies.


What AI "experts" get wrong - ChatGPT Red Teamer speaks

What AI "experts" get wrong - ChatGPT Red Teamer speaks

INSEAD AI (alum-led)